32 research outputs found

    Detecting Ontological Conflicts in Protocols between Semantic Web Services

    The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the types of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single standard ontology. In this work we investigate approaches the server can take, when the client publishes its ontology, to verify whether the execution of a protocol with that client can reach a state with semantically inconsistent results. Often a database is used to store the actual data alongside the ontologies, rather than storing the data as part of the ontology description. It is important to observe that, at the current state of the database, the semantic conflict state may not be reachable even if the server's verification indicates that a conflict state is possible. A relational-algebra-based decision procedure is therefore developed to incorporate the current state of the client and server databases into the overall verification procedure.
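    A minimal sketch of the database-aware check described above, assuming relations are modeled as lists of dicts; the relational-algebra operators (selection, natural join), the table contents, and the conflict condition are all invented for illustration:

    ```python
    # Hypothetical sketch: a semantic conflict is reachable in the *current*
    # database state only if the two databases actually hold data exercising
    # the conflicting ontology terms. Selection and natural join below are the
    # relational-algebra building blocks; all names/values are invented.

    def select(relation, predicate):
        """Relational selection: keep tuples satisfying the predicate."""
        return [t for t in relation if predicate(t)]

    def natural_join(r, s):
        """Natural join on all shared attribute names."""
        if not r or not s:
            return []
        shared = set(r[0]) & set(s[0])
        return [{**a, **b} for a in r for b in s
                if all(a[k] == b[k] for k in shared)]

    # The client's term "cost" and the server's term "price" denote different
    # concepts; a conflict state is reachable now only if some item occurs on
    # both sides with inconsistent values.
    client_db = [{"item": "bolt", "cost": 3}, {"item": "nut", "cost": 1}]
    server_db = [{"item": "bolt", "price": 2}]

    joined = natural_join(client_db, server_db)
    conflict_now = select(joined, lambda t: t["cost"] != t["price"])
    print(bool(conflict_now))  # True: "bolt" occurs on both sides with differing values
    ```

    An empty `conflict_now` would mean the statically detected conflict state is unreachable from the present database contents.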

    A Multi-objective Perspective for Operator Scheduling using Fine-grained DVS Architecture

    The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore the state space and report the Pareto-optimal front with respect to area and power. The scheduler also aims at maximum resource sharing and attains substantial area and power gains for complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm, operating without any user constraint (area/power), solves the problem for most available benchmarks, and that imposing a power or area budget leads to significant performance gains.
    Comment: 18 pages, 6 figures, International Journal of VLSI Design & Communication Systems (VLSICS)
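    The cycles-versus-power trade-off and the Pareto filtering can be sketched as follows; for clarity this uses exhaustive enumeration rather than the paper's pruned branch-and-bound, and the per-unit cycle/power/area numbers are invented:

    ```python
    from itertools import product

    # Hypothetical sketch of the design space: each operation is bound either
    # to a fast unit (more power) or to a DVFS-slowed unit (less power, but
    # extra cycles and area for voltage-scaling support). We enumerate all
    # bindings meeting a latency budget and keep the Pareto-optimal
    # (area, power) points. Numbers are invented.

    # option -> (cycles, power, area)
    OPTIONS = {"fast": (1, 4, 3), "slow": (2, 1, 4)}

    def pareto_front(points):
        """Keep points not dominated in both area and power (smaller is better)."""
        pts = set(points)
        return sorted(p for p in pts
                      if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                                 for q in pts))

    def explore(n_ops, latency_budget):
        feasible = []
        for binding in product(OPTIONS, repeat=n_ops):
            cycles = sum(OPTIONS[b][0] for b in binding)
            if cycles <= latency_budget:  # sequential schedule, for simplicity
                power = sum(OPTIONS[b][1] for b in binding)
                area = sum(OPTIONS[b][2] for b in binding)
                feasible.append((area, power))
        return pareto_front(feasible)

    print(explore(3, 5))  # [(9, 12), (10, 9), (11, 6)]
    ```

    A branch-and-bound version would prune a partial binding as soon as its lower bound on (area, power) is dominated by an already-found front point.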

    A Hybrid Test Architecture to Reduce Test Application Time in Full Scan Sequential Circuits

    Full-scan design is widely used to alleviate the complexity of test generation for sequential circuits. However, this approach substantially increases test application time because of the serial loading of vectors. Although BIST-based approaches offer faster testing, they usually suffer from low fault coverage. In this paper, we propose a hybrid test architecture that achieves a significant reduction in test application time. The test suite consists of (i) external deterministic test vectors to be scanned in, and (ii) internally generated responses of the CUT to be re-applied iteratively as tests in functional (non-scan) mode. The proposed architecture uses only combinational ATPG to hybridize deterministic testing with test-per-clock BIST, and thus makes good use of both scan-based and non-scan testing. We also present a bipartite-graph-based heuristic to select the deterministic test vectors, and use sequential fault simulation for an exact analysis of the faults detected during the re-application of the internally generated responses. Experimental results on ISCAS-89 benchmark circuits show the efficacy of the heuristic and reveal a significant reduction in test application time.
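    The vector-selection step can be sketched as a greedy maximum-coverage pass over a bipartite graph of vectors and faults; the vector names, fault names, and detection sets below are invented, and the paper's heuristic may differ in its selection rule:

    ```python
    # Hypothetical sketch of the bipartite-graph heuristic: one side of the
    # graph is the deterministic vectors, the other the faults each vector
    # detects. A greedy pass picks vectors until every fault is covered.

    def select_vectors(detects, all_faults):
        """detects: vector -> set of faults it catches."""
        chosen, uncovered = [], set(all_faults)
        while uncovered:
            # Pick the vector covering the most still-undetected faults.
            best = max(detects, key=lambda v: len(detects[v] & uncovered))
            if not detects[best] & uncovered:
                break  # remaining faults are undetectable by any vector
            chosen.append(best)
            uncovered -= detects[best]
        return chosen, uncovered

    detects = {
        "v1": {"f1", "f2", "f3"},
        "v2": {"f3", "f4"},
        "v3": {"f4", "f5"},
    }
    chosen, missed = select_vectors(detects, {"f1", "f2", "f3", "f4", "f5"})
    print(chosen, missed)  # ['v1', 'v3'] set()
    ```

    Sequential fault simulation would then credit additional fault detections to the re-applied CUT responses, shrinking the deterministic set further.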

    A formal method for detecting semantic conflicts in protocols between services with different ontologies

    The protocol between a web service and its client may lead to semantically inconsistent results if the ontologies used by the server and client differ. Given that the web is growing in a mostly uncoordinated way, it is unrealistic to expect web services to adhere to standardized ontologies in the near future. In this paper we show that if the client publishes its ontology and presents the protocol it intends to follow with a web service, then the web server can perform a semantic verification step to determine formally whether any possible execution of the protocol may lead to a semantic conflict arising from the differences in their ontologies. We believe this approach enables a web server to automatically verify the semantic compatibility of a client with the service it offers before allowing the client to execute the protocol. We model the ontologies as graphs and present a graph-based search algorithm to determine whether the protocol can reach a conflict state.
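    The graph-based search can be sketched as a BFS over the protocol's transition system, flagging the first reachable state whose exchanged term resolves to different concepts in the two ontologies; the ontologies, protocol, and state names below are all invented:

    ```python
    from collections import deque

    # Hypothetical sketch: ontologies map a term to the concept it denotes;
    # the protocol is a transition system whose edges are labelled with the
    # term exchanged in that step. BFS reports the first reachable state where
    # client and server resolve the exchanged term differently.

    client_ont = {"price": "amount-with-tax", "qty": "units"}
    server_ont = {"price": "amount-before-tax", "qty": "units"}

    # protocol: state -> list of (exchanged_term, next_state)
    protocol = {"s0": [("qty", "s1")], "s1": [("price", "s2")], "s2": []}

    def find_conflict(protocol, start, c_ont, s_ont):
        seen, queue = {start}, deque([start])
        while queue:
            state = queue.popleft()
            for term, nxt in protocol[state]:
                if c_ont.get(term) != s_ont.get(term):
                    return nxt  # the protocol can reach a conflicting state
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    print(find_conflict(protocol, "s0", client_ont, server_ont))  # s2
    ```

    With identical ontologies the search exhausts the graph and returns `None`, i.e., no execution of the protocol reaches a conflict state.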

    A Comparative NLP-Based Study on the Current Trends and Future Directions in COVID-19 Research

    COVID-19 is a global health crisis that has altered human life and still promises to create ripples of death and destruction in its wake. The sea of scientific literature published over a short time span to understand and mitigate this global phenomenon necessitates concerted efforts to organize our findings and focus on the unexplored facets of the disease. In this work, we applied natural language processing (NLP) based approaches to the scientific literature published on COVID-19 to infer significant keywords that have contributed to our social, economic, demographic, psychological, epidemiological, clinical, and medical understanding of this pandemic. We identify key terms appearing in the COVID-19 literature whose representation differs from that in the literature on other virus-borne diseases such as MERS, Ebola, and influenza. We also identify countries, topics, and research articles demonstrating that the scientific community is still reacting to short-term threats such as transmissibility, health risks, treatment plans, and public policies, underpinning the need for collective international efforts towards long-term immunization and drug-related challenges. Furthermore, our study highlights several long-term research directions that are urgently needed for COVID-19: global collaboration to create international open-access data repositories, policymaking to curb future outbreaks, the psychological repercussions of COVID-19, vaccine development for SARS-CoV-2 variants and long-term studies of their efficacy, and mental health issues in both children and the elderly.
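    The term-comparison step can be sketched by ranking terms whose relative frequency rises most in a COVID-19 corpus versus a corpus on other virus-borne diseases; the two toy "corpora" and the smoothing constant below are invented, not the paper's dataset or exact method:

    ```python
    from collections import Counter
    import re

    # Hypothetical sketch: compute per-corpus term frequencies, then rank
    # terms by their frequency ratio (COVID corpus over comparison corpus).
    # A small epsilon smooths terms absent from the comparison corpus.

    def term_freqs(text):
        tokens = re.findall(r"[a-z]+", text.lower())
        n = len(tokens)
        return {t: c / n for t, c in Counter(tokens).items()}

    covid = "lockdown vaccine transmissibility vaccine policy mental health"
    other = "vaccine outbreak fever transmission vaccine fever"

    f_covid, f_other = term_freqs(covid), term_freqs(other)
    eps = 1e-6  # smoothing for terms absent from the comparison corpus
    shift = {t: f_covid[t] / (f_other.get(t, 0) + eps) for t in f_covid}
    top = sorted(shift, key=shift.get, reverse=True)[:3]
    print(top)
    ```

    Terms common to both corpora (e.g. "vaccine" here) rank low, while COVID-specific terms dominate the top of the list.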

    Abstraction refinement for state space partitioning based on auxiliary state machines

    Counter-example guided abstraction refinement (CEGAR) techniques have primarily been used to scale the capacity of formal property verification. This paper explores the utility of CEGAR for verifying an emerging style of formal specifications, called AuxSM+properties, which consist of auxiliary state machines (AuxSMs) and formal properties based on them. A core challenge in formally verifying these specifications is partitioning the states of the design-under-test (DUT) into sets that map to the different states of the AuxSM. In this paper we present a CEGAR approach that solves this problem without explicitly traversing the entire state space of the DUT.

    Reliability annotations to formal specifications of context-sensitive safety properties in embedded systems

    As reliability becomes increasingly important in the context of safety-critical embedded systems, formalisms for specifying the reliability requirements of such systems have become very relevant. We present a formalism for succinctly modeling the reliability requirements of safety-critical embedded systems and propose its semantics over the task schedule of the embedded controller. We introduce the notion of reliability deficiency to represent the difference between the specified reliability and the reliability actually achieved by a schedule, and present techniques to make up this deficiency. The presented approach primarily applies to specifying the reliability requirements of context-sensitive tasks executed by a real-time software system, so that transient failures can be overcome using temporal redundancy, i.e., repetitive execution of the same task. We illustrate our formalism and the proposed techniques using suitable scenarios from the automotive domain.
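    The temporal-redundancy idea can be sketched with elementary probability: a task with per-execution transient failure probability p succeeds at least once in k tries with probability 1 − p^k. The failure rate and reliability target below are invented numbers, not values from the paper:

    ```python
    # Hypothetical sketch: find the minimum number of executions k that closes
    # the reliability deficiency, i.e. the smallest k with 1 - p**k >= target.
    # The scheduler must then reserve slack for k - 1 re-executions.

    def redundancy_needed(p_fail, target):
        """Minimum executions k so that 1 - p_fail**k >= target."""
        k, achieved = 1, 1.0 - p_fail
        while achieved < target:
            k += 1
            achieved = 1.0 - p_fail ** k
        return k

    # An automotive task with a 5% transient failure rate and a 0.9999 target:
    k = redundancy_needed(0.05, 0.9999)
    print(k)  # 4: the deficiency is closed after three re-executions
    ```

    Whether the schedule can actually accommodate k executions before the task's deadline is the feasibility question the paper's techniques address.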

    Cohesive coverage management: simulation meets formal methods

    Many experts in design verification have advocated that the key to successful verification convergence lies in developing the verification plan with adequate formal rigor. Traditionally, the verification plans for simulation and formal property verification (FPV) are developed in different ways, using different formalisms and with different coverage goals. In this paper, we propose a framework in which the difference between formal properties and simulation test points is diluted by translating one form of the specification into the other. This allows us to reuse simulation coverage to facilitate formal verification, and to reuse proven formal properties to cover simulation test points. We also propose the use of inline assertions in procedural (possibly randomized) test benches, and show that they facilitate hybrid verification techniques combining simulation and bounded model checking. We propose promising combinations of formal methods presented in our earlier papers to shape a hierarchical verification flow in which simulation and formal methods cover a common design intent specification. The proposed flow is demonstrated with a detailed case study of the ARM AMBA verification benchmark. We believe that the methods presented in this work will stimulate new thought processes and ultimately lead to wider adoption of cohesive coverage management techniques in the design intent validation flow.

    Execution ordering in AND/OR graphs with failure probabilities

    In this paper we consider finding solutions for problems represented using AND/OR graphs whose tasks can fail when executed. In our setting, each node represents an atomic task associated with a failure probability and a rollback penalty. This paper reports the following contributions: (a) an algorithm for finding the optimal ordering of the atomic tasks in a given solution graph that minimizes the expected penalty, (b) an algorithm for finding the optimal ordering in the presence of user-defined ordering constraints, and (c) a counterexample showing that the problem of finding the solution graph with minimum expected penalty lacks the optimal substructure property, together with a pseudo-polynomial algorithm for finding that solution graph.
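    The ordering objective can be illustrated with a brute-force baseline; the cost model below (on the first failure, pay the rollback penalties of every task executed so far) and the task numbers are assumptions for illustration, and the paper's algorithms avoid this factorial enumeration:

    ```python
    from itertools import permutations

    # Hypothetical sketch of the objective: tasks run in sequence; when one
    # fails, the rollback penalties of all tasks executed so far are paid.
    # Expected penalty sums over the position of the first failure.

    def expected_penalty(order):
        """order: sequence of (failure_prob, rollback_penalty) pairs."""
        total, prob_all_ok, rolled_back = 0.0, 1.0, 0.0
        for p, c in order:
            rolled_back += c
            total += prob_all_ok * p * rolled_back  # fail here: pay all so far
            prob_all_ok *= 1.0 - p
        return total

    tasks = [(0.5, 10.0), (0.1, 1.0), (0.3, 5.0)]
    best = min(permutations(tasks), key=expected_penalty)
    print([tasks.index(t) for t in best], round(expected_penalty(best), 3))
    ```

    The exchange argument behind the paper's polynomial-time ordering algorithm compares adjacent task swaps under exactly this kind of expected-penalty expression.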